
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer

Neural Information Processing Systems

Since visual perception can give rich information beyond text descriptions for world understanding, there has been increasing interest in leveraging visual grounding for language learning. Recently, vokenization (Tan and Bansal, 2020) has attracted attention by using the predictions of a text-to-image retrieval model as labels for language model supervision. Despite its success, the method suffers from approximation error of using finite image labels and the lack of vocabulary diversity of a small image-text dataset. To overcome these limitations, we present VidLanKD, a video-language knowledge distillation method for improving language understanding. We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
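The contrast between the two supervision signals can be made concrete. Below is a minimal illustrative sketch (invented names and shapes, not the paper's implementation): vokenization supervises each token with a hard class index drawn from a finite set of retrieved images, which introduces the approximation error mentioned above, whereas feature-level distillation matches continuous teacher representations directly.

```python
import numpy as np

def voken_ce_loss(token_logits, voken_ids):
    """Vokenization-style supervision: each token receives a discrete
    image ('voken') label from a finite retrieval set, so the target is
    a hard class index -- the source of the approximation error."""
    logits = token_logits - token_logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(voken_ids)), voken_ids])

def feature_distillation_loss(student_h, teacher_h):
    """Distillation-style alternative: match continuous teacher features
    directly, avoiding quantization into a finite label set."""
    return np.mean((student_h - teacher_h) ** 2)
```

The distillation loss is zero only when student and teacher features agree exactly, so the supervision signal is not bottlenecked by a fixed label vocabulary.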


Supplementary Materials for VidLanKD

Neural Information Processing Systems

In this appendix, we start by describing the experimental setup details. Each sampled video from the 30K set has on average around 100 clips. To investigate whether the additional MLP distillation head (Sec. 3.3 in the main paper) affects performance, we remove it: as Table 1 shows, for both NST and CRD, performance drops on all downstream tasks (SST-2, QNLI, QQP, MNLI) when the distillation heads are removed. Table 1 reports these ablation results of the additional distillation heads for BERT student language models. In Table 2, we compare the accuracy of text-only pretraining, image-based KD, and video-based KD on PIQA; video-based KD further improves the results.
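To make the ablated component concrete, here is a minimal sketch (hypothetical names and shapes, not the released implementation) of an MLP distillation head together with an NST-style objective, which matches L2-normalized Gram matrices of token activations between student and teacher:

```python
import numpy as np

def mlp_distillation_head(hidden, w1, w2):
    """Small MLP head (the component ablated in Table 1) that projects
    student hidden states into the teacher's feature space."""
    return np.maximum(hidden @ w1, 0.0) @ w2  # two-layer ReLU MLP

def nst_loss(student_feats, teacher_feats):
    """NST-style objective: compare L2-normalized Gram matrices of
    token-level activations (a maximum mean discrepancy with a
    linear kernel). Zero when the two feature sets align."""
    def gram(f):
        f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
        return f @ f.T
    return np.mean((gram(student_feats) - gram(teacher_feats)) ** 2)
```

In this sketch the head only reshapes the student features before the loss; the ablation in Table 1 corresponds to feeding the raw student hidden states into the loss instead.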




VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer

Tang, Zineng, Cho, Jaemin, Tan, Hao, Bansal, Mohit

arXiv.org Artificial Intelligence

Since visual perception can give rich information beyond text descriptions for world understanding, there has been increasing interest in leveraging visual grounding for language learning. Recently, vokenization has attracted attention by using the predictions of a text-to-image retrieval model as labels for language model supervision. Despite its success, the method suffers from approximation error of using finite image labels and the lack of vocabulary diversity of a small image-text dataset. To overcome these limitations, we present VidLanKD, a video-language knowledge distillation method for improving language understanding. We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset. To avoid approximation error, we propose to use different knowledge distillation objectives. In addition, the use of a large-scale video-text dataset helps learn diverse and richer vocabularies. In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models, on several downstream language understanding tasks including GLUE, SQuAD, and SWAG. We also demonstrate the improved world knowledge, physical reasoning, and temporal reasoning capabilities of our model by evaluating on the GLUE-diagnostics, PIQA, and TRACIE datasets. Lastly, we present comprehensive ablation studies as well as visualizations of the learned text-to-video grounding results of our teacher and student language models. Our code and models are available at: https://github.com/zinengtang/VidLanKD
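As an illustration of a contrastive distillation objective of the kind the paper employs alongside NST, here is a CRD-style InfoNCE sketch (invented names, not the authors' code): each student representation is pulled toward its own teacher representation and pushed away from the other teacher representations in the batch.

```python
import numpy as np

def crd_loss(student, teacher, temperature=0.1):
    """CRD-style contrastive distillation sketch: treat the matched
    (student_i, teacher_i) pair as the positive and all other teacher
    vectors in the batch as negatives, then apply an InfoNCE loss."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # InfoNCE over matched pairs
```

The loss is small when each student vector is closest to its own teacher vector and grows when the pairing is scrambled, which is the behavior a distillation objective needs.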